The integration of large language models (LLMs) such as ChatGPT is transforming education in the health sciences. This study evaluated the applicability of ChatGPT-4 and ChatGPT-4o in endodontics, focusing on their reliability and repeatability in responding to practitioner-level questions. Thirty closed-ended clinical questions, based on international guidelines, were each submitted thirty times to both models, generating a total of 1,800 responses. These responses were evaluated by endodontic experts using a 3-point Likert scale. ChatGPT-4 achieved a reliability score of 52.67%, while ChatGPT-4o slightly outperformed it at 55.22%. Notably, ChatGPT-4o demonstrated greater response consistency, with higher repeatability as measured by Gwet's AC1 and percentage agreement. While both models show promise in supporting learning, ChatGPT-4o may provide more consistent and pedagogically coherent feedback, particularly in contexts where response dependability is essential. From an educational standpoint, the findings support ChatGPT's potential as a complementary tool for guided study or formative assessment in dentistry. However, given the models' moderate reliability, unsupervised use in specialized or clinically relevant contexts is not recommended. These insights are valuable for educators and instructional designers seeking to integrate AI into digital pedagogy. Further research should examine the performance of LLMs across diverse disciplines and question formats to better define their role in AI-enhanced education.
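For reference, Gwet's AC1 is a chance-corrected agreement coefficient; the abstract does not state the exact formulation used, so the sketch below gives the standard definition under the assumption of categorical (unweighted) agreement over the study's three rating categories:

```latex
% Standard definition of Gwet's AC1 (assumed formulation; not taken from the study).
% P_a : observed percentage agreement across repeated responses.
% P_e : chance-agreement probability over the K rating categories
%       (K = 3 for the study's 3-point Likert scale), where \pi_k is the
%       mean proportion of ratings assigned to category k.
\[
  \mathrm{AC1} = \frac{P_a - P_e}{1 - P_e},
  \qquad
  P_e = \frac{1}{K-1}\sum_{k=1}^{K}\pi_k\,(1-\pi_k).
\]
```

Unlike Cohen's kappa, AC1 remains stable when category prevalence is highly skewed, which makes it a common choice for repeatability analyses of this kind.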